
Empirical entropy, minimax regret and minimax risk



Abstract

We consider the random design regression model with square loss. We propose a method that aggregates empirical minimizers (ERM) over appropriately chosen random subsets and reduces to ERM in the extreme case, and we establish sharp oracle inequalities for its risk. We show that, under the $\varepsilon^{-p}$ growth of the empirical $\varepsilon$-entropy, the excess risk of the proposed method attains the rate $n^{-2/(2+p)}$ for $p\in(0,2)$ and $n^{-1/p}$ for $p>2$, where $n$ is the sample size. Furthermore, for $p\in(0,2)$, the excess risk rate matches the behavior of the minimax risk of function estimation in regression problems under the well-specified model. This yields the conclusion that the rates of statistical estimation in well-specified models (minimax risk) and in misspecified models (minimax regret) are equivalent in the regime $p\in(0,2)$. In other words, for $p\in(0,2)$ the problem of statistical learning enjoys the same minimax rate as the problem of statistical estimation. On the contrary, for $p>2$ we show that the rates of the minimax regret are, in general, slower than those of the minimax risk. Our oracle inequalities also imply the $v\log(n/v)/n$ rates for Vapnik-Chervonenkis type classes of dimension $v$ without the usual convexity assumption on the class; we show that these rates are optimal. Finally, for a slightly modified method, we derive a bound on the excess risk of $s$-sparse convex aggregation improving that of Lounici [Math. Methods Statist. 16 (2007) 246-259] and providing the optimal rate.
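For quick reference, the rate dichotomy stated in the abstract can be written in one display. This is only a restatement of the claims above; the notation $H_n(\varepsilon)$ for the empirical $\varepsilon$-entropy is ours, not the paper's:

$$\text{if } H_n(\varepsilon) \lesssim \varepsilon^{-p}, \qquad \text{excess risk} \;\asymp\; \begin{cases} n^{-2/(2+p)}, & p\in(0,2) \quad (\text{minimax regret} \asymp \text{minimax risk}),\\ n^{-1/p}, & p>2 \quad (\text{minimax regret in general slower than minimax risk}). \end{cases}$$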
